
    Implementing generalized deep-copy in MPI

    In this paper we introduce a framework for implementing deep copy on top of MPI. The process is initiated by passing just the root object of the dynamic data structure; our framework takes care of all pointer traversal, communication, copying and reconstruction on the receiving nodes. The benefit of our approach is that MPI users can deep copy complex dynamic data structures without writing bespoke communication or serialize/deserialize methods for each object. Such methods present a challenging implementation problem and quickly become unwieldy to maintain when working with complex structured data. This paper demonstrates our generic implementation, which encapsulates both approaches. We analyze the approach with a variety of structures (trees, graphs (including complete graphs) and rings) and demonstrate that it performs comparably to hand-written implementations while using a vastly simplified programming interface. We make the complete source code available as a convenient header file.
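    To make concrete the kind of bespoke serialize/deserialize code such a framework replaces, the sketch below hand-writes the communication of a small binary tree using plain MPI calls. This is not the paper's framework or its API; the Node type and the pre-order flattening scheme are our own illustration, run with two ranks (e.g. mpirun -np 2).

        // Hand-written deep copy of a binary tree over MPI (illustrative only).
        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        struct Node {
            int value;
            Node* left;
            Node* right;
        };

        // Pre-order flatten: for each node emit (value, has_left, has_right).
        void serialize(const Node* n, std::vector<int>& out) {
            if (!n) return;
            out.push_back(n->value);
            out.push_back(n->left  ? 1 : 0);
            out.push_back(n->right ? 1 : 0);
            serialize(n->left, out);
            serialize(n->right, out);
        }

        // Rebuild the tree on the receiving rank from the flat buffer.
        Node* deserialize(const std::vector<int>& in, std::size_t& pos) {
            Node* n = new Node{in[pos], nullptr, nullptr};
            int hasLeft = in[pos + 1], hasRight = in[pos + 2];
            pos += 3;
            if (hasLeft)  n->left  = deserialize(in, pos);
            if (hasRight) n->right = deserialize(in, pos);
            return n;
        }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                Node* root = new Node{1, new Node{2, nullptr, nullptr},
                                         new Node{3, nullptr, nullptr}};
                std::vector<int> buf;
                serialize(root, buf);
                int len = static_cast<int>(buf.size());
                MPI_Send(&len, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                MPI_Send(buf.data(), len, MPI_INT, 1, 1, MPI_COMM_WORLD);
            } else if (rank == 1) {
                int len = 0;
                MPI_Recv(&len, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::vector<int> buf(len);
                MPI_Recv(buf.data(), len, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::size_t pos = 0;
                Node* root = deserialize(buf, pos);
                std::printf("rank 1 rebuilt root value %d\n", root->value);
            }
            MPI_Finalize();
            return 0;   // nodes intentionally not freed, for brevity
        }

    Even for this toy structure, the traversal, buffer layout and reconstruction logic must all be kept in sync by hand; this is the maintenance burden the generic deep-copy layer is designed to remove.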

    Quality Assessment and Variance Reduction in Monte Carlo Rendering Algorithms

    Over the past few decades much work has focused on physically based rendering, which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models, structuring the problem as complex high-dimensional integrals [Kaj86] which do not have a closed-form solution. Stochastic processes such as Monte Carlo methods can be structured to approximate the expectation of these integrals, producing algorithms which converge to the true rendering solution in the limit as the amount of computation is increased.

    When a finite amount of computation is used to approximate the rendering solution, images contain undesirable distortions in the form of noise from under-sampling in image regions with complex light interactions. An important aspect of developing algorithms in this domain is having a means of accurately comparing and contrasting the relative performance gains between different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data to a single scalar value which can be used as a representative measure of image quality and fidelity. These measures are largely developed on image datasets containing natural images (photographs) coupled with their synthetically distorted versions, and quality assessment scores given by human observers under controlled viewing conditions. Inference using these measures therefore relies on whether the synthetic distortions used to develop them are representative of the natural distortions that will be seen in images from the domain being assessed.

    When we consider images generated through stochastic rendering processes, the structure of the visible distortions present in un-converged images is highly complex and varies spatially with lighting and scene composition. In this domain the simple synthetic distortions commonly used to train and evaluate IQA measures are not representative of the complex natural distortions arising from the rendering process. This raises the question of how robust IQA measures are when applied to physically based rendered images.

    In this thesis we summarize the classical and recent work in the area of physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large-scale distributed computing environments. With this framework we use high performance computing to generate a dataset of Monte Carlo images. From this we provide a study of the effectiveness of modern and classical IQA measures and their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No Reference IQA problem, where we wish to assess the quality of a rendered image without knowing its true value.
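    As a hedged illustration of two of the ideas above (Monte Carlo estimation of pixel values and condensing image differences into a single quality score), the sketch below, which is not code from the thesis, renders a toy image by averaging random samples per pixel and compares it to a high-sample reference using MSE and PSNR, the simplest full-reference measures. The integrand, image size and sample counts are stand-ins.

        // Toy per-pixel Monte Carlo estimator plus MSE/PSNR comparison.
        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>

        // Stand-in integrand: in a real renderer this would be the measurement
        // contribution of a sampled light path for pixel (x, y).
        double integrand(double x, double y, double u) {
            return 0.5 + 0.5 * std::sin(10.0 * (x + y) + 6.28318 * u);
        }

        // Average spp samples per pixel; converges to the true integral as spp grows.
        std::vector<double> render(int w, int h, int spp, unsigned seed) {
            std::mt19937 rng(seed);
            std::uniform_real_distribution<double> uni(0.0, 1.0);
            std::vector<double> img(w * h, 0.0);
            for (int p = 0; p < w * h; ++p) {
                double x = (p % w) / double(w), y = (p / w) / double(h);
                double sum = 0.0;
                for (int s = 0; s < spp; ++s) sum += integrand(x, y, uni(rng));
                img[p] = sum / spp;
            }
            return img;
        }

        // Mean squared error between a test image and a reference image.
        double mse(const std::vector<double>& a, const std::vector<double>& b) {
            double e = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i)
                e += (a[i] - b[i]) * (a[i] - b[i]);
            return e / a.size();
        }

        int main() {
            const int w = 64, h = 64;
            std::vector<double> reference = render(w, h, 4096, 7);  // high sample count
            for (int spp : {4, 16, 64, 256}) {
                double m = mse(render(w, h, spp, 11), reference);
                // PSNR with peak value 1.0; error drops as spp increases.
                std::printf("spp=%4d  MSE=%.6f  PSNR=%.2f dB\n",
                            spp, m, 10.0 * std::log10(1.0 / m));
            }
            return 0;
        }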

    Analysis of reported error in Monte Carlo rendered images

    Evaluating image quality in Monte Carlo rendered images is an important aspect of the rendering process, as we often need to determine the relative quality between images computed using different algorithms and with varying amounts of computation. The use of a gold-standard reference image, or ground truth (GT), is a common way to provide a baseline against which to compare experimental results. We show that, if not chosen carefully, the reference image can skew results, leading to significant misreporting of error. We present an analysis of error in Monte Carlo rendered images and discuss practices to avoid, or be aware of, when designing an experiment.
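    A small numeric sketch of the effect described above (our own illustration, not the paper's experiment): when the reference is itself an unbiased Monte Carlo estimate, its residual variance adds to the reported error, since E[(X - Y)^2] = Var(X) + Var(Y) for independent unbiased estimators of the same value, so a noisy reference inflates the measured MSE.

        // Reported MSE against an analytic truth, a converged reference, and a
        // noisy reference, for the same test estimate in all three cases.
        #include <cstdio>
        #include <random>

        int main() {
            std::mt19937 rng(42);
            std::normal_distribution<double> noise(0.0, 1.0);
            const double mu = 1.0;       // true pixel value, known by construction
            const int trials = 100000;
            const int nTest = 16, nRefGood = 1024, nRefBad = 64;  // samples per estimate

            auto estimate = [&](int n) {  // unbiased MC estimate with variance 1/n
                double s = 0.0;
                for (int i = 0; i < n; ++i) s += mu + noise(rng);
                return s / n;
            };

            double errTrue = 0.0, errGood = 0.0, errBad = 0.0;
            for (int t = 0; t < trials; ++t) {
                double x = estimate(nTest);
                double dTrue = x - mu;
                double dGood = x - estimate(nRefGood);
                double dBad  = x - estimate(nRefBad);
                errTrue += dTrue * dTrue;
                errGood += dGood * dGood;
                errBad  += dBad * dBad;
            }
            std::printf("MSE vs analytic truth     : %.5f\n", errTrue / trials);
            std::printf("MSE vs converged reference: %.5f\n", errGood / trials);
            std::printf("MSE vs noisy reference    : %.5f (inflated)\n", errBad / trials);
            return 0;
        }

    Running this shows the error reported against the under-sampled reference is noticeably larger than the true error, even though the test image is identical in all three comparisons.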

    Using Artificial Immune Systems to Sort and Shim Insertion Devices at Diamond Light Source

    Presentation delivered by Edward Rial on the Opt-ID software, given at SRI 2021 on the 28th of March, 2022. Work on Opt-ID is funded by the Ada Lovelace Centre.

    Sampling strategies for learning-based 3D medical image compression

    Recent achievements of sequence prediction models in numerous domains, including compression, provide great potential for novel learning-based codecs. In such models, the shape and size of the input sequence play a crucial role in learning the mapping from the data distribution to the target output. This work examines numerous input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed Long Short-Term Memory (LSTM) model to achieve a high compression ratio and fast encoding-decoding performance.

    Our LSTM models are trained with 4-fold cross-validation on 12 high-resolution CT datasets while measuring the models' compression ratios and execution times. Several sequence configurations have been evaluated, and our results demonstrate that pyramid-shaped sampling represents the best trade-off between performance and compression ratio (up to 3x). We also address the problem of non-deterministic environments, allowing our models to run in parallel with little loss in compression performance.

    Experimental evaluation was carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (CT and MRI). Our new methodology allows straightforward parallelisation that speeds up the decoder by up to 37x compared to previous methods. Overall, the trained models demonstrate efficiency and generalisability for compressing 3D medical images losslessly while still outperforming well-known lossless methods by approximately 17% and 12%. To the best of our knowledge, this is the first study that focuses on voxel-wise predictions of volumetric medical imaging for lossless compression.
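    The abstract does not spell out the exact sampling geometry, so the sketch below is an assumed illustration of the general idea only: building a causal, pyramid-shaped context for one voxel of a 16-bit volume, with a narrow patch taken from the previous slice and a wider one from the slice before that, ordered into the input sequence for a many-to-one predictor. The Volume layout, offsets and zero padding are our own choices, not the paper's.

        // Causal pyramid-shaped context extraction for one voxel (illustrative).
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct Volume {
            int depth, height, width;
            std::vector<std::uint16_t> data;           // z-major layout
            std::uint16_t at(int z, int y, int x) const {
                if (z < 0 || y < 0 || x < 0 || z >= depth || y >= height || x >= width)
                    return 0;                          // zero-pad outside the volume
                return data[(std::size_t(z) * height + y) * width + x];
            }
        };

        // Context radius grows with distance from the target slice: radius 1 at
        // z-1, radius 2 at z-2, ... giving the pyramid shape (3x3, then 5x5, ...).
        std::vector<std::uint16_t> pyramidContext(const Volume& v, int z, int y, int x,
                                                  int slices = 2) {
            std::vector<std::uint16_t> seq;
            for (int dz = slices; dz >= 1; --dz) {     // farthest (widest) slice first
                int r = dz;
                for (int dy = -r; dy <= r; ++dy)
                    for (int dx = -r; dx <= r; ++dx)
                        seq.push_back(v.at(z - dz, y + dy, x + dx));
            }
            return seq;                                // ordered input for the LSTM
        }

        int main() {
            Volume v{4, 8, 8, std::vector<std::uint16_t>(4 * 8 * 8, 100)};
            auto ctx = pyramidContext(v, 3, 4, 4);
            std::printf("context length: %zu\n", ctx.size());  // 25 + 9 = 34 samples
            return 0;
        }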

    MedZip: 3D medical images lossless compressor using recurrent neural network (LSTM)

    As scanners produce higher-resolution and more densely sampled images, data storage, transmission and communication within healthcare systems become increasingly challenging. Since the quality of medical images plays a crucial role in diagnostic accuracy, medical image compression techniques are needed that reduce scan bitrate while guaranteeing lossless reconstruction. This paper presents a lossless compression method that integrates a Recurrent Neural Network (RNN) as a 3D sequence prediction model. The aim is to learn the long-range dependencies of a voxel's 3D neighbourhood using a Long Short-Term Memory (LSTM) network and then compress the residual error using arithmetic coding. Experimental results reveal that our method obtains a higher compression ratio, achieving a 15% saving compared to state-of-the-art lossless compression standards, including JPEG-LS, JPEG2000, JP3D, HEVC, and PPMd. Our evaluation demonstrates that the proposed method generalizes well to unseen CT and MRI modalities for lossless compression. To the best of our knowledge, this is the first lossless compression method that uses an LSTM neural network for 16-bit volumetric medical image compression.
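    The sketch below illustrates only the residual-coding step implied above; the predictor is a trivial placeholder standing in for the LSTM, and the arithmetic coder is omitted entirely. It shows the one property that matters for lossless operation: taking residuals modulo 2^16 on 16-bit voxels makes prediction-plus-residual exactly invertible, so the decoder reproduces every voxel bit for bit.

        // Residual computation and bit-exact reconstruction (predictor is a stub).
        #include <cassert>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Placeholder predictor: previous voxel value (an LSTM would go here).
        std::uint16_t predict(const std::vector<std::uint16_t>& decoded) {
            return decoded.empty() ? 0 : decoded.back();
        }

        // Encoder side: residual symbols that an arithmetic coder would compress.
        std::vector<std::uint16_t> encodeResiduals(const std::vector<std::uint16_t>& voxels) {
            std::vector<std::uint16_t> decoded, residuals;
            for (std::uint16_t v : voxels) {
                std::uint16_t r = static_cast<std::uint16_t>(v - predict(decoded)); // mod 2^16
                residuals.push_back(r);
                decoded.push_back(v);            // encoder mirrors the decoder state
            }
            return residuals;
        }

        // Decoder side: prediction + residual reproduces every voxel exactly.
        std::vector<std::uint16_t> decodeResiduals(const std::vector<std::uint16_t>& residuals) {
            std::vector<std::uint16_t> decoded;
            for (std::uint16_t r : residuals)
                decoded.push_back(static_cast<std::uint16_t>(predict(decoded) + r));
            return decoded;
        }

        int main() {
            std::vector<std::uint16_t> voxels = {1200, 1210, 1190, 1250, 40000, 39995};
            auto residuals = encodeResiduals(voxels);
            assert(decodeResiduals(residuals) == voxels);   // bit-perfect round trip
            std::printf("round trip OK, %zu residual symbols\n", residuals.size());
            return 0;
        }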

    Lossless compression for volumetric medical images using deep neural network with local sampling

    Data compression plays a central role in handling the bottleneck of data storage, transmission and processing. Lossless compression requires reducing the file size whilst maintaining bit-perfect decompression, which is the main requirement in medical applications. This paper presents a novel lossless compression method for 16-bit medical imaging volumes. The aim is to train a neural network (NN) as a 3D data predictor that minimizes the differences from the original data values, and to compress the resulting residuals using arithmetic coding. We compare the compression performance of our proposed models against state-of-the-art lossless compression methods, showing that our approach achieves a higher compression ratio than JPEG-LS, JPEG2000, JP3D, and HEVC and generalizes well.
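    As a hedged aside on why training the predictor to minimize the residuals pays off (our own illustration, not code from the paper): the zero-order empirical entropy of the residual symbols approximates the bits per voxel an ideal entropy coder could reach, and dividing the raw 16 bits per voxel by that figure approximates the achievable compression ratio. Smaller, more peaked residuals therefore translate directly into a higher ratio.

        // Zero-order entropy of residual streams as a proxy for achievable bitrate.
        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <unordered_map>
        #include <vector>

        double bitsPerSymbol(const std::vector<std::uint16_t>& symbols) {
            std::unordered_map<std::uint16_t, std::size_t> counts;
            for (std::uint16_t s : symbols) ++counts[s];
            double bits = 0.0, n = static_cast<double>(symbols.size());
            for (const auto& kv : counts) {
                double p = kv.second / n;
                bits -= p * std::log2(p);
            }
            return bits;                               // zero-order entropy estimate
        }

        int main() {
            // Toy residual streams: a good predictor leaves small, peaked residuals,
            // a poor one leaves residuals spread over a wide range of values.
            std::vector<std::uint16_t> good, poor;
            for (int i = 0; i < 10000; ++i) {
                good.push_back(static_cast<std::uint16_t>(i % 7));
                poor.push_back(static_cast<std::uint16_t>((i * 2654435761u) % 4096));
            }
            double bGood = bitsPerSymbol(good), bPoor = bitsPerSymbol(poor);
            std::printf("good predictor : %.2f bits/voxel, ratio ~%.1fx\n", bGood, 16.0 / bGood);
            std::printf("poor predictor : %.2f bits/voxel, ratio ~%.1fx\n", bPoor, 16.0 / bPoor);
            return 0;
        }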